5 research outputs found

    Harmonize: a shared environment for extended immersive entertainment

    Virtual reality (VR) and augmented reality (AR) applications are widespread nowadays. Moreover, recent technological innovations have led to the diffusion of commercial head-mounted displays (HMDs) for immersive VR: users can enjoy entertainment activities that fill their visual fields, experiencing a sensation of physical presence in these virtual immersive environments (IEs). Even if AR and VR are mostly used separately, they can be effectively combined to provide a multi-user shared environment (SE), where two or more users perform specific tasks cooperatively or competitively, offering a wider set of interactions and use cases than immersive VR alone. However, due to the differences between the two technologies, it is difficult to develop SEs offering a similar experience to both AR and VR users. This paper presents Harmonize, a novel framework for deploying SE-based applications with a comparable experience for both AR and VR users. Moreover, the framework is hardware-independent and has been designed to be as extendable to novel hardware as possible. An immersive game has been designed to test and evaluate the proposed framework. Assessment of the system through the System Usability Scale (SUS) questionnaire and the Game Experience Questionnaire (GEQ) shows a positive evaluation.
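    The abstract reports a SUS-based evaluation. SUS has a standard scoring rule (ten 1-5 Likert items mapped to a 0-100 score); a minimal sketch, with a function name of our choosing rather than anything from the paper:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    1-5 Likert responses, following the standard SUS scoring rule."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered (positively worded) items contribute r - 1;
        # even-numbered (negatively worded) items contribute 5 - r.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    # The summed item scores (0-40) are rescaled to 0-100.
    return total * 2.5

# All-neutral answers (3 everywhere) give the midpoint score.
print(sus_score([3] * 10))  # → 50.0
```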

    3D scene reconstruction system based on a mobile device

    Augmented reality (AR) and virtual reality (VR) applications can take advantage of efficient digitalization of real objects, as reconstructed elements allow users a better connection between the real and virtual worlds than pre-set 3D CAD models. Technology advances contribute to the spread of AR and VR technologies, which are increasingly widespread and popular. On the other hand, the design and implementation of virtual and extended worlds is still an open problem; affordable and robust solutions to support 3D object digitalization are still missing. This work proposes a reconstruction system that allows users to obtain a 3D CAD model starting from a single image of the object to be digitalized and reconstructed. A smartphone can be used to take a photo of the object under analysis, and a remote server performs the reconstruction process by exploiting a pipeline of three Deep Learning methods. The accuracy and robustness of the system have been assessed through several experiments, and the main outcomes show that the proposed solution achieves accuracy (measured by chamfer distance) comparable to state-of-the-art methods for 3D object reconstruction.
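    The chamfer distance used here as the accuracy metric is a standard point-cloud comparison: the average nearest-neighbour distance from each cloud to the other, summed symmetrically. A minimal NumPy sketch (conventions vary; some papers use squared distances):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric chamfer distance between point clouds a (N, 3) and b (M, 3):
    mean nearest-neighbour distance from a to b plus from b to a."""
    # Pairwise Euclidean distance matrix, shape (N, M).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Identical clouds are at zero chamfer distance.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(pts, pts))  # → 0.0
```

    The brute-force (N, M) distance matrix is fine for small clouds; real evaluation pipelines typically use a KD-tree for the nearest-neighbour queries.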

    Snap2cad: 3D indoor environment reconstruction for AR/VR applications using a smartphone device

    Indoor environment reconstruction is a challenging task in Computer Vision and Computer Graphics, especially when Extended Reality (XR) technologies are considered. Current solutions that employ dedicated depth sensors require scanning of the environment and tend to suffer from low resolution and noise, whereas solutions that rely on a single photo of a scene cannot predict the actual position and scale of objects due to scale ambiguity. The proposed system addresses these limitations by allowing the user to capture single views of objects using an Android smartphone equipped with a single RGB camera and supported by Google ARCore. The system includes 1) an Android app that tracks the smartphone's position relative to the world, captures a single RGB image for each object and estimates depth information of the scene, and 2) a program running on a server that classifies the framed objects, retrieves the corresponding 3D models from a database and estimates their position, vertical rotation, and scale factor without deforming the shape. The system has been assessed by measuring the translational, rotational and scaling errors of the considered objects with respect to the physical ones acting as ground truth. The main outcomes show that the proposed solution obtains a maximum error of 18% for the scaling factor, less than nine centimeters for the position and less than 18° for the rotation. These results suggest that the proposed system can be employed for XR applications, thus bridging the gap between the real and virtual worlds.
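    The evaluation compares estimated object poses against physical ground truth along the three quantities the abstract names: position, vertical (yaw) rotation, and uniform scale. A hedged sketch of how such errors could be computed; the pose representation and field names are our own illustration, not taken from the paper:

```python
import math

def pose_errors(est, gt):
    """Compare an estimated object pose against ground truth.
    Each pose is a dict with 'position' (x, y, z) in metres, 'yaw' in
    degrees (rotation about the vertical axis) and 'scale' (uniform
    factor). Field names are illustrative assumptions."""
    # Euclidean distance between the two positions, in metres.
    t_err = math.dist(est["position"], gt["position"])
    # Smallest absolute angular difference, wrapped into [0, 180].
    r_err = abs((est["yaw"] - gt["yaw"] + 180) % 360 - 180)
    # Relative scale error as a percentage of the ground-truth scale.
    s_err = abs(est["scale"] - gt["scale"]) / gt["scale"] * 100
    return t_err, r_err, s_err

est = {"position": (1.0, 0.0, 2.05), "yaw": 350.0, "scale": 1.1}
gt = {"position": (1.0, 0.0, 2.00), "yaw": 5.0, "scale": 1.0}
print(pose_errors(est, gt))  # roughly (0.05 m, 15.0°, 10.0 %)
```

    Wrapping the yaw difference matters: a naive subtraction would report 345° for the 350° vs 5° pair above instead of the true 15° error.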

    A single RGB image based 3D object reconstruction system

    Easy and fast digitalization of real objects is especially useful for augmented reality (AR) and virtual reality (VR), as reconstructed objects allow a better interaction between the real and virtual worlds than pre-made 3D CAD models. Thanks to the ubiquity of smartphones and the spread of immersive VR devices, AR and VR technologies are rapidly becoming popular. However, an affordable, robust and easy-to-use solution for object digitalization is still missing. This paper presents a reconstruction system that allows users to convert a single photo of a real object into a digital 3D asset. A smartphone is used to capture a snapshot of the object, whereas a secondary computing device performs the reconstruction process by exploiting a pipeline of three Deep Learning methods. Several experiments have been conducted to assess the accuracy and robustness of the system, using a standard metric for measuring reconstruction accuracy (chamfer distance). The main outcomes show that the proposed system achieves accuracy comparable to state-of-the-art methods for 3D object reconstruction.

    Machine Learning and Digital Twin for Production Line Simulation: A Real Use Case

    The advent of Industry 4.0 has boosted the usage of innovative technologies to promote the digital transformation of manufacturing realities, especially exploiting the possibilities offered by cyber-physical systems and virtual environments (VEs). Digital Twins (DTs) have been widely adopted to virtually reproduce the physical world for training activities and simulations, and today they can also leverage the integration of Machine Learning (ML), which is considered a relevant technology for Industry 4.0. This paper investigates the usage of a combination of DT and ML technologies in the context of a real production environment, specifically the creation of a DT enhanced with YOLO (You Only Look Once), a state-of-the-art, real-time object detection algorithm. The ML system has been trained on automatically generated and labelled synthetic data, and its performance enables its usage in the VE for real-time user training.
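    Automatic labelling is the key advantage of synthetic data: the virtual scene already knows each object's exact bounding box, so label files can be emitted alongside the rendered frames. A minimal sketch of converting a pixel-space box into the standard YOLO label format (the function name is ours; class-centre-size values are normalised to [0, 1] as YOLO expects):

```python
def yolo_label(class_id, box, img_w, img_h):
    """Convert a pixel bounding box (x_min, y_min, x_max, y_max) into a
    YOLO-format label line: class index followed by box centre and size,
    all normalised by the image dimensions."""
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2 / img_w   # normalised centre x
    yc = (y_min + y_max) / 2 / img_h   # normalised centre y
    w = (x_max - x_min) / img_w        # normalised width
    h = (y_max - y_min) / img_h        # normalised height
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# A synthetic renderer supplies the exact box, so the label costs nothing.
print(yolo_label(0, (100, 200, 300, 400), 640, 480))
# → 0 0.312500 0.625000 0.312500 0.416667
```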